
    Magnification Control in Self-Organizing Maps and Neural Gas

    We consider different ways to control the magnification in self-organizing maps (SOM) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on different approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. Thereby, the approach of concave-convex learning in SOM is extended to a more general description, whereas the concave-convex learning for NG is new. In general, the control mechanisms generate only slightly different behavior when comparing the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas in the SOM case the results hold only for the one-dimensional case. (Comment: 24 pages, 4 figures)
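    As a minimal illustrative sketch of the rank-based NG update discussed above: the density-dependent factor `d_win**m` below is a hypothetical stand-in for localized learning (the exponent `m` and its exact use are assumptions, not the paper's precise control rule; `m = 0` recovers plain neural gas).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def neural_gas_step(W, x, eps, lam, m=0.0):
        """One online neural gas update of codebook W for sample x.

        Rank-based neighborhood: every prototype moves toward x, weighted
        by exp(-rank / lam), where rank 0 is the winner.  The hypothetical
        factor d_win**m rescales the step by the winner distance to mimic
        a localized-learning magnification control (assumption).
        """
        d = np.linalg.norm(W - x, axis=1)
        ranks = np.argsort(np.argsort(d))            # rank 0 = winner
        local = d[ranks == 0][0] ** m if m else 1.0  # m = 0 -> plain NG
        W += (eps * local * np.exp(-ranks / lam))[:, None] * (x - W)
        return W

    # toy run: quantize 1-D data drawn from a triangular density on [0, 1]
    W = rng.uniform(0, 1, size=(10, 1))
    for t in range(2000):
        x = rng.triangular(0, 1, 1, size=1)
        W = neural_gas_step(W, x, eps=0.05, lam=1.0, m=0.0)
    ```

    Since each step is a convex combination of the old prototype and the sample, the prototypes stay inside the data range.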

    Hierarchical Strategy of Model Partitioning for VLSI-Design Using an Improved Mixture of Experts Approach

    The partitioning of complex processor models at the gate and register-transfer level for parallel functional simulation based on the clock-cycle algorithm is considered. We introduce a hierarchical partitioning scheme that combines various partitioning algorithms within a competing strategy. By merging different partitioning results within one level using superpositions, we move from a competing strategy to a mixture-of-experts approach. This approach is further improved by applying genetic algorithms. In addition, we present two new partitioning algorithms, both of which take cones as the fundamental units for building partitions.

    Hierarchical Model Partitioning for Parallel VLSI-Simulation Using Evolutionary Algorithms Improved by Superpositions of Partitions

    Parallelization of VLSI-simulation exploiting model-inherent parallelism is a promising way to accelerate verification of whole processor designs. The partitioning of hardware models has an essential influence on the efficiency of the subsequent parallel simulations. Based on a formal model of Parallel Cycle Simulation, we introduce a partition valuation that combines communication and load-balancing aspects. We choose a two-level hierarchical partitioning scheme providing a framework for a mixture-of-experts strategy. Considering a complete model of a PowerPC 604 processor, we demonstrate that Evolutionary Algorithms can be applied successfully to our model partitioning problem on the second hierarchy level, assuming a reduced problem complexity after fast pre-partitioning on the first level. For the first time, we apply superpositions during the execution of the Evolutionary Algorithms, resulting in a faster-decreasing fitness function and accelerated population handling.
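    The combined valuation and the evolutionary step can be sketched roughly as follows. This is an illustration only: the toy graph, the valuation weights, and the mutation-plus-truncation selection are assumptions, not the paper's actual scheme.

    ```python
    import random

    random.seed(1)

    # toy netlist as an undirected graph over N nodes (assumption: a real
    # model would use cones or gates as partitioning units)
    N = 24
    edges = [(i, j) for i in range(N) for j in range(i + 1, N)
             if random.random() < 0.2]

    def valuation(part, k=2, alpha=1.0):
        """Hypothetical partition valuation in the spirit of the abstract:
        cut size (communication cost) plus weighted load imbalance."""
        cut = sum(1 for i, j in edges if part[i] != part[j])
        sizes = [part.count(p) for p in range(k)]
        return cut + alpha * (max(sizes) - min(sizes))

    def evolve(pop_size=20, gens=100, k=2):
        """Mutation-only EA with truncation selection over k-way partitions."""
        pop = [[random.randrange(k) for _ in range(N)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=valuation)                    # lower valuation = fitter
            survivors = pop[: pop_size // 2]
            children = []
            for p in survivors:
                c = p[:]
                c[random.randrange(N)] = random.randrange(k)  # point mutation
                children.append(c)
            pop = survivors + children
        return min(pop, key=valuation)

    best = evolve()
    ```

    A superposition operator, as in the abstract, would merge agreeing regions of several good partitions before mutating; it is omitted here to keep the sketch small.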

    Input Pruning for Neural Gas Architectures

    Hammer B, Villmann T. Input Pruning for Neural Gas Architectures. In: Proc. of European Symposium on Artificial Neural Networks (ESANN'2001). Brussels, Belgium: D facto publications; 2001: 283-288

    Effizient Klassifizieren und Clustern: Lernparadigmen von Vektorquantisierern (Efficient Classification and Clustering: Learning Paradigms of Vector Quantizers)

    Hammer B, Villmann T. Effizient Klassifizieren und Clustern: Lernparadigmen von Vektorquantisierern. Künstliche Intelligenz. 2006;3(6):5-11

    Generalized Relevance Learning Vector Quantization

    Hammer B, Villmann T. Generalized Relevance Learning Vector Quantization. Neural Networks. 2002;15(8-9):1059-1068

    Magnification control for batch neural gas

    Hammer B, Hasenfuss A, Villmann T. Magnification control for batch neural gas. Neurocomputing. 2007;70(7-9):1225-1234

    Batch-GRLVQ

    Hammer B, Villmann T. Batch-GRLVQ. In: Verleysen M, ed. Proc. of European Symposium on Artificial Neural Networks (ESANN'2002). Brussels, Belgium: d-side; 2002: 295-300